Virtual reality (VR) technology is typically used in entertainment applications; however, it has also been deployed in practical applications in more serious aspects of our lives, such as safety. To support people who work in hazardous industries, VR can ensure that operators carry out standardized tasks and collaborate to deal with potential risks. Surprisingly, little research has focused on how people collaborate in VR environments, and few studies have paid attention to operators' cognitive load during collaborative tasks. Once task demands become complex, many researchers focus on optimizing the design of the interaction interface to reduce operators' cognitive load. Such an approach can be valuable; however, it may in fact place a more significant cognitive burden on operators and can lead to more errors and collaboration failures. In this paper, we propose a new collaborative VR system that supports two teleoperators working in a VR environment to remotely control an uncrewed ground vehicle. We evaluate the collaborative VR system with a comparative experiment, focusing on the time spent on tasks and the total number of operations. Our results show that both the total number of operations and the cognitive load during the operation process were significantly lower in the two-person group than in the single-person group. Our work sheds light on implications for designing VR systems that support collaborative work with respect to teleoperators' workflows, rather than simply optimizing the design outcomes.
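As a rough illustration of the kind of between-group comparison described above, the sketch below runs independent-samples t-tests on task time and operation counts; all numbers and variable names are invented for illustration and are not the paper's data.

```python
# Hypothetical sketch: independent-samples t-tests comparing the single-person
# and two-person groups on task completion time and total operation count.
# All values are invented for illustration.
import numpy as np
from scipy import stats

solo_time = np.array([412.0, 388.5, 455.2, 430.1, 401.7])  # seconds
pair_time = np.array([305.3, 298.8, 341.0, 312.6, 289.4])
solo_ops = np.array([58, 61, 66, 59, 63])                  # operation counts
pair_ops = np.array([41, 44, 39, 46, 42])

for name, a, b in [("task time", solo_time, pair_time),
                   ("operations", solo_ops, pair_ops)]:
    t, p = stats.ttest_ind(a, b, equal_var=False)  # Welch's t-test
    print(f"{name}: t = {t:.2f}, p = {p:.4f}")
```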
Visual question answering is an important task in natural language and vision understanding. However, in most public visual question answering datasets, such as VQA and CLEVR, the questions are human-generated and specific to the given image, such as "What color are her eyes?". Such human-generated, crowdsourced questions are relatively simple and sometimes biased toward certain entities or attributes. In this paper, we introduce a new image-based question answering dataset, ChiQA. It contains real-world queries issued by internet users, each combined with several related open-domain images, and a system should determine whether an image can answer the question. Unlike previous VQA datasets, these questions are real-world, image-independent queries that are more varied and unbiased. Compared with previous image-retrieval or image-captioning datasets, ChiQA measures not only relatedness but also answerability, which requires finer-grained vision and language reasoning. ChiQA contains more than 40K questions and more than 200K question-image pairs. A three-level 2/1/0 label is assigned to each pair, indicating a perfect answer, a partial answer, and irrelevance, respectively. Data analysis shows that ChiQA requires a deep understanding of both language and vision, including grounding, comparison, and reading. We evaluate several state-of-the-art vision-language models, such as ALBEF, and show that there is still substantial room for improvement on ChiQA.
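To make the dataset's structure concrete, here is a hypothetical sketch of what a ChiQA-style question-image pair and its three-level label might look like; the field names and example values are assumptions, not the released schema.

```python
# Hypothetical sketch of a ChiQA-style record and its 2/1/0 answerability
# label; field names are assumptions, not the released dataset schema.
from dataclasses import dataclass

@dataclass
class ChiQAPair:
    question: str    # real-world query issued by an internet user
    image_path: str  # one of several related open-domain images
    label: int       # 2 = perfect answer, 1 = partial answer, 0 = irrelevant

pairs = [
    ChiQAPair("how tall is the eiffel tower", "img/eiffel_facts.jpg", 2),
    ChiQAPair("how tall is the eiffel tower", "img/eiffel_at_night.jpg", 0),
]

# A model predicts a 0/1/2 answerability label per question-image pair;
# simple accuracy over the three classes is one way to score it.
def accuracy(preds, golds):
    return sum(p == g for p, g in zip(preds, golds)) / len(golds)
```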
This paper presents a novel method for face clustering in videos using a video-centralised transformer. Previous works often employed contrastive learning to learn frame-level representations and used average pooling to aggregate features along the temporal dimension; this approach may not fully capture complex video dynamics. Moreover, despite recent progress in video-based contrastive learning, few works have attempted to learn a self-supervised, clustering-friendly face representation that benefits the video face clustering task. To overcome these limitations, our method employs a transformer to directly learn video-level representations that better reflect the temporally varying properties of faces in videos, and we also propose a video-centralised self-supervised framework to train the transformer model. We further investigate face clustering in egocentric videos, a fast-emerging field that has not yet been studied in face clustering work. To this end, we present and release the first large-scale egocentric video face clustering dataset, named EasyCom-Clustering. We evaluate our proposed method on both the widely used Big Bang Theory (BBT) dataset and the new EasyCom-Clustering dataset. Results show that our video-centralised transformer surpasses all previous state-of-the-art methods on both benchmarks, exhibiting a self-attentive understanding of face videos.
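As a hedged sketch of the core idea, the following PyTorch snippet learns a single video-level embedding from frame-level face features with a transformer encoder instead of average pooling; the dimensions, layer counts, and the learnable video token are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch: a transformer encoder aggregates frame-level face features
# into one video-level embedding via a learnable video token, rather than
# average pooling. All hyperparameters are assumptions.
import torch
import torch.nn as nn

class VideoFaceEncoder(nn.Module):
    def __init__(self, feat_dim=256, n_heads=4, n_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.video_token = nn.Parameter(torch.zeros(1, 1, feat_dim))

    def forward(self, frame_feats):            # (B, T, D) frame-level features
        b = frame_feats.size(0)
        tokens = torch.cat([self.video_token.expand(b, -1, -1), frame_feats],
                           dim=1)
        encoded = self.encoder(tokens)         # attends across all frames
        return encoded[:, 0]                   # (B, D) video-level embedding

video_emb = VideoFaceEncoder()(torch.randn(8, 16, 256))  # 8 tracks, 16 frames
```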
Hypertension is a leading cause of cardiovascular disease and premature death. Different hypertension subtypes may vary in their prognoses and require different treatments. An individual's risk of hypertension is determined by genetic and environmental factors as well as their interactions. In this work, we studied 911 African Americans and 1,171 European Americans in the Hypertension Genetic Epidemiology Network (HyperGEN) cohort. We built hypertension subtype classification models using environmental variables and sets of genetic features selected according to different criteria. The fitted models provide insight into the genetic landscape of hypertension subtypes, which may aid personalized diagnosis and treatment of hypertension in the future.
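A minimal sketch of the described modelling setup, assuming a tabular cohort file with numeric-coded environmental columns and SNP genotype columns; the file and all column names are hypothetical.

```python
# Minimal sketch: fit a subtype classifier on environmental variables plus
# selected genetic features. The file and column names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("hypergen_cohort.csv")                   # hypothetical file
env_cols = ["age", "sex", "bmi", "smoking"]               # numeric-coded environment
snp_cols = [c for c in df.columns if c.startswith("rs")]  # selected genetic features

X = df[env_cols + snp_cols]
y = df["hypertension_subtype"]                            # subtype label

model = LogisticRegression(max_iter=1000)
print(cross_val_score(model, X, y, cv=5).mean())          # mean CV accuracy
```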
Weakly-supervised object localization aims to indicate the category as well as the scope of an object in an image given only image-level labels. Most existing works are based on Class Activation Mapping (CAM) and endeavor to enlarge the discriminative area inside the activation map to perceive the whole object, yet ignore the co-occurrence confounder of object and context (e.g., fish and water), which makes it hard for the model to distinguish object boundaries. Besides, the use of CAM also poses a dilemma: classification and localization always suffer from a performance gap and cannot reach their highest accuracy simultaneously. In this paper, we propose a causal knowledge distillation method, dubbed KD-CI-CAM, to address these two under-explored issues in one go. More specifically, we tackle the co-occurrence context confounder problem via causal intervention (CI), which explores the causalities among image features, contexts, and categories to eliminate the biased object-context entanglement in the class activation maps. Based on the de-biased object feature, we additionally propose a multi-teacher causal distillation framework to balance the absorption of classification knowledge and localization knowledge during model training. Extensive experiments on several benchmarks demonstrate the effectiveness of KD-CI-CAM in learning clear object boundaries from confounding contexts and in addressing the dilemma between classification and localization performance.
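For background, the snippet below sketches vanilla Class Activation Mapping, the mechanism KD-CI-CAM builds on: the localization map is a classifier-weighted sum of the final convolutional feature maps. This is the standard CAM recipe, not the paper's full method.

```python
# Standard CAM computation: weight the last conv layer's feature maps by the
# linear classifier's weights for a chosen class, then ReLU and normalize.
import torch

def class_activation_map(feature_maps: torch.Tensor,  # (C, H, W), last conv layer
                         fc_weights: torch.Tensor,    # (num_classes, C)
                         class_idx: int) -> torch.Tensor:
    w = fc_weights[class_idx]                          # (C,)
    cam = torch.einsum("c,chw->hw", w, feature_maps)   # weighted sum over channels
    cam = torch.relu(cam)                              # keep positive evidence
    return cam / (cam.max() + 1e-8)                    # normalize to [0, 1]
```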
Real-world robotic grasping can be done robustly if a complete 3D Point Cloud Data (PCD) of an object is available. In practice, however, PCDs are often incomplete when objects are viewed from few and sparse viewpoints before the grasping action, leading to the generation of wrong or inaccurate grasp poses. We propose a novel grasping strategy, named 3DSGrasp, that predicts the missing geometry from the partial PCD to produce reliable grasp poses. Our proposed PCD completion network is a Transformer-based encoder-decoder network with an Offset-Attention layer. The network is inherently invariant to object pose and point permutation, and generates PCDs that are geometrically consistent and properly completed. Experiments on a wide range of partial PCDs show that 3DSGrasp outperforms the best state-of-the-art method on PCD completion tasks and largely improves the grasping success rate in real-world scenarios. The code and dataset will be made available upon acceptance.
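The following is a hedged PyTorch sketch of an Offset-Attention layer in the spirit used by point cloud transformers: per-point features are refined by the offset between the input and its self-attention output. The layer sizes and normalization are assumptions, not the 3DSGrasp architecture.

```python
# Hedged sketch of an Offset-Attention layer: self-attention over per-point
# features, then a residual refinement of the input/attention *offset*.
import torch
import torch.nn as nn

class OffsetAttention(nn.Module):
    def __init__(self, dim: int = 256, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.lbr = nn.Sequential(nn.Linear(dim, dim), nn.LayerNorm(dim), nn.ReLU())

    def forward(self, x):                  # x: (B, N, dim) per-point features
        attn_out, _ = self.attn(x, x, x)   # permutation-equivariant self-attention
        offset = x - attn_out              # the "offset" the layer is named for
        return x + self.lbr(offset)        # residual refinement

feats = OffsetAttention()(torch.randn(2, 1024, 256))  # 2 partial clouds
```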
In this paper, a semantic communication framework for image transmission is developed. In the investigated framework, a set of servers cooperatively transmit images to a set of users utilizing semantic communication techniques. To evaluate the performance of the studied semantic communication system, a multimodal metric is proposed to measure the correlation between the extracted semantic information and the original image. To meet each user's requirement on this metric (the ISS requirement), each server must jointly determine the semantic information to be transmitted and the resource blocks (RBs) used for semantic information transmission. We formulate this problem as an optimization problem that aims to minimize each server's transmission latency while meeting the ISS requirement. To solve this problem, a value-decomposition-based entropy-maximized multi-agent reinforcement learning (RL) algorithm is proposed, which enables servers to coordinate during training and execute RB allocation in a distributed manner, approaching globally optimal performance with fewer training iterations. Compared to traditional multi-agent RL, the proposed algorithm improves the servers' exploration of valuable actions and the probability of finding a globally optimal RB allocation policy based on local observations. Simulation results show that the proposed algorithm can reduce the transmission delay by up to 16.1% compared to traditional multi-agent RL.
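To make the value-decomposition idea concrete, here is an illustrative sketch in which each server keeps a local Q-network over its RB-allocation actions, a joint value is formed by summing the local values, and an entropy term rewards exploration; the shapes and objective are placeholders, not the paper's exact formulation.

```python
# Illustrative sketch of VDN-style value decomposition with an entropy bonus;
# observation sizes, action counts, and the objective are placeholders.
import torch
import torch.nn as nn

class AgentQ(nn.Module):
    """Local Q-network of one server over its RB-allocation actions."""
    def __init__(self, obs_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_actions))
    def forward(self, obs):
        return self.net(obs)                       # (B, n_actions)

agents = [AgentQ(obs_dim=10, n_actions=5) for _ in range(3)]
obs = [torch.randn(32, 10) for _ in agents]        # local observations only

local_q = [a(o) for a, o in zip(agents, obs)]
chosen = [q.argmax(dim=1, keepdim=True) for q in local_q]
joint_q = sum(q.gather(1, c).squeeze(1)            # value decomposition:
              for q, c in zip(local_q, chosen))    # joint value = sum of locals

# Entropy of each server's softmax policy; maximizing it keeps exploration alive.
policies = [torch.softmax(q, dim=1) for q in local_q]
entropy = sum(-(p * p.clamp_min(1e-8).log()).sum(1).mean() for p in policies)
# During training, joint_q would be regressed toward an entropy-augmented TD
# target; this sketch only shows the decomposition and the entropy term.
```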
New-architecture GPUs such as the A100 are now equipped with multi-instance GPU (MIG) technology, which allows the GPU to be partitioned into multiple small, isolated instances. This technology provides more flexibility for users to support both deep learning training and inference workloads, but utilizing it efficiently can still be challenging. The vision of this paper is to provide a more comprehensive and practical benchmark study for MIG in order to eliminate the need for tedious manual benchmarking and tuning efforts. To achieve this vision, the paper presents MIGPerf, an open-source tool that streamlines the benchmark study for MIG. Using MIGPerf, the authors conduct a series of experiments, including deep learning training and inference characterization on MIG, GPU sharing characterization, and framework compatibility with MIG. The results of these experiments provide new insights and guidance for users to effectively employ MIG, and lay the foundation for further research on the orchestration of hybrid training and inference workloads on MIGs. The code and results are released at https://github.com/MLSysOps/MIGProfiler. This work is still in progress and more results will be published soon.
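As a flavour of the kind of micro-benchmark such a tool automates, the sketch below pins a PyTorch inference workload to one MIG instance via CUDA_VISIBLE_DEVICES and measures throughput; the MIG UUID is a placeholder, and this is not MIGPerf's actual code.

```python
# Hedged sketch of a throughput micro-benchmark pinned to one MIG instance.
# CUDA_VISIBLE_DEVICES accepts a MIG device UUID; the one below is a placeholder.
import os, time
os.environ["CUDA_VISIBLE_DEVICES"] = "MIG-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"

import torch
import torchvision

model = torchvision.models.resnet50().cuda().eval()
batch = torch.randn(32, 3, 224, 224, device="cuda")

with torch.no_grad():
    for _ in range(10):                 # warm-up iterations
        model(batch)
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(100):
        model(batch)
    torch.cuda.synchronize()

print(f"throughput: {100 * 32 / (time.time() - start):.1f} images/s")
```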
With the development of technology and the sharing economy, Airbnb, a well-known short-term rental platform, has become the first choice for many young people. The pricing of Airbnb listings has long been a problem worth studying. While previous studies achieve promising results, deficiencies remain: (1) the feature attributes of rentals are not rich enough; (2) the research on rental text information is not deep enough; and (3) there are few studies that predict rental prices using the points of interest (POIs) around the house. To address these challenges, we propose a multi-source information embedding (MSIE) model to predict the rental price of Airbnb listings. Specifically, we first select statistical features to embed the original rental data. Secondly, we generate word feature vectors and emotional scores from three different kinds of text information to form the text feature embedding. Thirdly, we use the POIs around each rental house to generate a variety of spatial network graphs, and learn embeddings of these networks to obtain the spatial feature embedding. Finally, we combine the three modules into a multi-source rental representation and use a fully connected neural network to predict the price. Analysis of the experimental results shows the effectiveness of our proposed model.
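A schematic sketch of the fusion step described above: the statistical, text, and spatial (POI) embeddings are concatenated and fed into a fully connected network that regresses the price. All dimensions are assumptions.

```python
# Schematic sketch of the fusion step: concatenate the three embeddings and
# regress the price with a fully connected network. Dimensions are assumed.
import torch
import torch.nn as nn

class MSIEPricePredictor(nn.Module):
    def __init__(self, stat_dim=20, text_dim=128, spatial_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(stat_dim + text_dim + spatial_dim, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 1))                       # predicted nightly price

    def forward(self, stat_emb, text_emb, spatial_emb):
        fused = torch.cat([stat_emb, text_emb, spatial_emb], dim=1)
        return self.mlp(fused)

price = MSIEPricePredictor()(torch.randn(4, 20), torch.randn(4, 128),
                             torch.randn(4, 64))    # batch of 4 listings
```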
Domain adaptive detection aims to improve the generalization of detectors on a target domain. To reduce the discrepancy in feature distributions between two domains, recent approaches achieve domain adaptation through feature alignment at different granularities via adversarial learning. However, they neglect the relationship between multiple granularities and different features in alignment, which degrades detection. Addressing this, we introduce a unified multi-granularity alignment (MGA)-based detection framework for domain-invariant feature learning. The key is to encode the dependencies across different granularities, including pixel-, instance-, and category-levels, simultaneously to align two domains. Specifically, based on pixel-level features, we first develop an omni-scale gated fusion (OSGF) module to aggregate discriminative representations of instances with scale-aware convolutions, leading to robust multi-scale detection. Besides, we introduce multi-granularity discriminators to identify which domain, source or target, different granularities of samples come from. Note that MGA not only leverages instance discriminability across different categories but also exploits category consistency between the two domains for detection. Furthermore, we present an adaptive exponential moving average (AEMA) strategy that exploits model assessments to guide model updates, improving pseudo labels and alleviating the local misalignment problem, which boosts detection robustness. Extensive experiments on multiple domain adaptation scenarios validate the superiority of MGA over other approaches on FCOS and Faster R-CNN detectors. Code will be released at https://github.com/tiankongzhang/MGA.
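For reference, the snippet below sketches a plain exponential-moving-average model update, the mechanism AEMA adapts; AEMA's adaptive, assessment-driven decay is the paper's contribution and is not reproduced here.

```python
# Plain EMA update of a teacher model from a student model. AEMA additionally
# modulates the decay using model assessments, which is not shown here.
import torch

@torch.no_grad()
def ema_update(teacher: torch.nn.Module, student: torch.nn.Module,
               decay: float = 0.999) -> None:
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.mul_(decay).add_(s_p, alpha=1.0 - decay)  # t = d*t + (1-d)*s
```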